Advances in neuroimaging techniques have provided novel insights into how humans think. Functional magnetic resonance imaging (fMRI) is the most popular and widely used neuroimaging technique, and there is growing interest in fMRI-based markers of individual differences. However, its utility is often limited by its high cost and the difficulty of acquiring data from particular populations, including children and infants. A surrogate marker, or neural correlate, of fMRI markers would have important practical implications, yet few independent predictors of fMRI markers are known. Here, using machine learning (ML) models and data augmentation, we predicted well-established fMRI markers of human cognition from multivariate patterns of functional near-infrared spectroscopy (fNIRS), a portable and relatively inexpensive optical neuroimaging technique. We recruited 50 human participants who performed two cognitive tasks (a stop-signal task and a probabilistic reversal learning task) while neural activation was measured with either fNIRS or fMRI at each of two visits. Using ML models and data augmentation, we could predict well-established fMRI markers of response inhibition or prediction-error signals from 48-channel fNIRS activation in the prefrontal cortex. These results suggest that fNIRS may offer a surrogate marker of fMRI activation, which would broaden our understanding of a wide range of populations, including infants.
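As a rough illustration of the kind of pipeline the abstract describes, the sketch below fits a ridge regression on noise-augmented 48-channel features to predict a scalar marker. The synthetic data, the Gaussian-jitter augmentation, and the closed-form ridge solver are assumptions for illustration, not the study's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 50 participants x 48 fNIRS channels,
# one scalar fMRI marker (e.g., response-inhibition activation) each.
n_subjects, n_channels = 50, 48
X = rng.normal(size=(n_subjects, n_channels))
true_w = rng.normal(size=n_channels)
y = X @ true_w + 0.1 * rng.normal(size=n_subjects)

def augment(X, y, n_copies=4, noise_sd=0.05):
    """Simple data augmentation: jitter the fNIRS channels with Gaussian noise."""
    Xs = [X] + [X + rng.normal(scale=noise_sd, size=X.shape) for _ in range(n_copies)]
    return np.vstack(Xs), np.concatenate([y] * (n_copies + 1))

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

X_aug, y_aug = augment(X, y)
w = ridge_fit(X_aug, y_aug)
r = np.corrcoef(X @ w, y)[0, 1]  # correlation of predicted vs. observed markers
```

On this noiseless toy problem the predicted and observed markers correlate almost perfectly; with real fNIRS data the augmentation serves to compensate for the small sample size.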
translated by Google Translate
Universal Domain Adaptation aims to transfer the knowledge between the datasets by handling two shifts: domain-shift and category-shift. The main challenge is correctly distinguishing the unknown target samples while adapting the distribution of known class knowledge from source to target. Most existing methods approach this problem by first training the target adapted known classifier and then relying on a single threshold to distinguish unknown target samples. However, this simple threshold-based approach prevents the model from considering the underlying complexities existing between the known and unknown samples in the high-dimensional feature space. In this paper, we propose a new approach in which we use two sets of feature points, namely dual Classifiers for Prototypes and Reciprocals (CPR). Our key idea is to associate each prototype with corresponding known class features while pushing the reciprocals apart from these prototypes to locate them in the potential unknown feature space. The target samples are then classified as unknown if they fall near any reciprocals at test time. To successfully train our framework, we collect the partial, confident target samples that are classified as known or unknown through our proposed multi-criteria selection. We then additionally apply the entropy loss regularization to them. For further adaptation, we also apply standard consistency regularization that matches the predictions of two different views of the input to make the target feature space more compact. We evaluate our proposal, CPR, on three standard benchmarks and achieve comparable or new state-of-the-art results. We also provide extensive ablation experiments to verify our main design choices in our framework.
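A minimal sketch of the test-time rule the abstract describes: a sample is rejected as unknown when it lies nearer to any reciprocal point than to every class prototype. The 2-D coordinates below are made up for illustration and are not CPR's learned features:

```python
import numpy as np

def classify_with_reciprocals(feat, prototypes, reciprocals):
    """Return the nearest known class index, or 'unknown' if the feature
    falls closer to a reciprocal point than to every prototype."""
    d_proto = np.linalg.norm(prototypes - feat, axis=1)   # (num_classes,)
    d_recip = np.linalg.norm(reciprocals - feat, axis=1)  # (num_reciprocals,)
    if d_recip.min() < d_proto.min():
        return "unknown"
    return int(d_proto.argmin())

prototypes = np.array([[0.0, 0.0], [4.0, 0.0]])  # one prototype per known class
reciprocals = np.array([[2.0, 3.0]])             # pushed into unknown feature space

print(classify_with_reciprocals(np.array([0.2, -0.1]), prototypes, reciprocals))  # 0
print(classify_with_reciprocals(np.array([2.0, 2.8]), prototypes, reciprocals))   # unknown
```

Compared with a single global threshold, the reciprocals give the unknown region an explicit, multi-modal shape in feature space.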
Recently, AutoFlow has shown promising results on learning a training set for optical flow, but requires ground truth labels in the target domain to compute its search metric. Observing a strong correlation between the ground truth search metric and self-supervised losses, we introduce self-supervised AutoFlow to handle real-world videos without ground truth labels. Using self-supervised loss as the search metric, our self-supervised AutoFlow performs on par with AutoFlow on Sintel and KITTI where ground truth is available, and performs better on the real-world DAVIS dataset. We further explore using self-supervised AutoFlow in the (semi-)supervised setting and obtain competitive results against the state of the art.
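The self-supervised search metric can be sketched as a photometric warping error that needs no ground-truth flow: warp the second frame back with the candidate flow and measure how well it matches the first. The nearest-neighbor warp below is a simplification of the census/photometric losses used in practice:

```python
import numpy as np

def warp(img2, flow):
    """Backward-warp img2 toward frame 1 (nearest-neighbor for brevity)."""
    h, w = img2.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img2[yt, xt]

def photometric_loss(img1, img2, flow):
    """Mean absolute photometric error after warping: no labels required."""
    return np.abs(img1 - warp(img2, flow)).mean()

# Toy pair: img2 is img1 shifted right by 2 pixels.
img1 = np.zeros((8, 8)); img1[:, 2] = 1.0
img2 = np.zeros((8, 8)); img2[:, 4] = 1.0
good_flow = np.full((8, 8, 2), [2.0, 0.0])  # the correct horizontal shift
zero_flow = np.zeros((8, 8, 2))
assert photometric_loss(img1, img2, good_flow) < photometric_loss(img1, img2, zero_flow)
```

Because the loss is lower for flows that explain the frame pair, it can rank candidate training sets the same way a ground-truth error metric would.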
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Few-shot dialogue state tracking (DST) models track user requests in dialogue with reliable accuracy even when trained with only a small amount of data. In this paper, we introduce an ontology-free few-shot DST with a self-feeding belief state input. The self-feeding belief state input increases accuracy in multi-turn dialogues by summarizing the previous dialogue. In addition, we newly formulate a slot auxiliary task. This new auxiliary task helps classify whether a slot is mentioned in the dialogue. Our model achieves the best score in the few-shot setting for four domains on MultiWOZ 2.0.
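The self-feeding belief state input can be sketched as follows: instead of re-encoding the full dialogue history each turn, the running slot-value summary is serialized and prepended to the current user utterance. The serialization format and the slot names here are hypothetical, not the paper's exact scheme:

```python
def update_belief_state(belief, turn_slots):
    """Self-feed: merge the latest turn's slot-value pairs into the running state."""
    belief = dict(belief)
    belief.update(turn_slots)
    return belief

def build_model_input(belief, user_utterance):
    """Serialize the summarized belief state instead of the full history."""
    state = " ; ".join(f"{k}={v}" for k, v in sorted(belief.items()))
    return f"[belief] {state} [user] {user_utterance}"

belief = {}
belief = update_belief_state(belief, {"hotel-area": "north"})
belief = update_belief_state(belief, {"hotel-stars": "4"})
print(build_model_input(belief, "book it for two nights"))
# [belief] hotel-area=north ; hotel-stars=4 [user] book it for two nights
```

Feeding the model its own compact summary keeps the input length roughly constant as the dialogue grows.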
In this paper, we propose a density estimation framework based on tree tensor-network states. The proposed method consists of determining the tree topology with the Chow-Liu algorithm and obtaining, via sketching techniques, a linear system that defines the tensor-network components. Novel choices of sketch functions are developed to account for graphical models that contain loops. Sample complexity guarantees are provided and further corroborated by numerical experiments.
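A minimal sketch of the first step, recovering the tree topology with the Chow-Liu algorithm: build a maximum-weight spanning tree over the pairwise empirical mutual information (Kruskal with union-find below). The sketching step that solves for the tensor components is omitted:

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical MI between two discrete variables (natural log)."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_tree(data):
    """Maximum-weight spanning tree over pairwise MI (Kruskal + union-find)."""
    d = data.shape[1]
    edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                    for i, j in combinations(range(d), 2)), reverse=True)
    parent = list(range(d))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy Markov chain X0 -> X1 -> X2: the recovered tree should link 0-1 and 1-2.
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 2000)
x1 = (x0 ^ (rng.random(2000) < 0.1)).astype(int)
x2 = (x1 ^ (rng.random(2000) < 0.1)).astype(int)
tree = chow_liu_tree(np.column_stack([x0, x1, x2]))
```

By the data-processing inequality, MI(X0, X2) is smaller than the MIs along the chain, so the spurious edge 0-2 is rejected.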
In this paper, we develop a Laplacian pyramid-like autoencoder (LPAE) by incorporating the Laplacian pyramid (LP) concept widely used in signal processing to analyze images. LPAE decomposes an image into an approximation image and a detail image in the encoder part, and then attempts to reconstruct the original image in the decoder part using the two components. We conduct experiments with LPAE in the areas of classification and super-resolution. Using the detail image and the smaller-sized approximation image as inputs to a classification network, our LPAE makes the model lighter. Moreover, we show that the performance of the connected classification networks remains high. In the super-resolution area, we show that the decoder part obtains a high-quality reconstructed image by giving it an LP-like structure. Consequently, LPAE improves on the original results by combining the decoder of the autoencoder with the super-resolution network.
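The encoder's LP-style split and the decoder's exact recombination can be sketched with simple average-pooling and pixel-repetition operators; the paper's actual learned filters may differ, but the decomposition/reconstruction structure is the same:

```python
import numpy as np

def down(img):
    """2x downsample by average pooling (the approximation branch)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):
    """2x upsample by pixel repetition."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def lp_decompose(img):
    """Encoder: split into a half-size approximation and a full-size detail."""
    approx = down(img)
    detail = img - up(approx)
    return approx, detail

def lp_reconstruct(approx, detail):
    """Decoder: recombine the two components; reconstruction is exact."""
    return up(approx) + detail

img = np.arange(16, dtype=float).reshape(4, 4)
approx, detail = lp_decompose(img)
assert np.allclose(lp_reconstruct(approx, detail), img)
```

Because detail is defined as the residual of the upsampled approximation, reconstruction is lossless by construction, which is what makes the half-size approximation a safe, lighter input for a downstream classifier.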
Despite the abundance of electronic health records (EHRs), their heterogeneity restricts the utilization of medical data in building predictive models. To address this challenge, we propose the Universal Healthcare Predictive Framework (UniHPF), which requires no medical domain knowledge and minimal preprocessing for multiple prediction tasks. Experimental results demonstrate that UniHPF is capable of building large-scale EHR models that can process any form of medical data from distinct EHR systems. Our framework significantly outperforms baseline models in multi-source learning tasks, including transfer and pooled learning, while also showing comparable results when trained on a single medical dataset. To empirically demonstrate the efficacy of our work, we conducted extensive experiments using various datasets, model structures, and tasks. We believe that our findings can provide helpful insights for further research on multi-source learning over EHRs.
Open compound domain adaptation (OCDA) considers the target domain as a compound of multiple unknown homogeneous subdomains. The goal of OCDA is to minimize the domain gap between the labeled source domain and the unlabeled compound target domain, which benefits the model's generalization to unseen domains. Current OCDA methods for semantic segmentation adopt manual domain separation and employ a single model to simultaneously adapt to all target subdomains. However, adapting to one target subdomain may hinder the model from adapting to other dissimilar target subdomains, leading to limited performance. In this work, we introduce a multi-teacher framework with bidirectional photometric mixing to adapt to each target subdomain separately. First, we present an automatic domain separation to find the optimal number of subdomains. On this basis, we propose a multi-teacher framework in which each teacher model uses bidirectional photometric mixing to adapt to one target subdomain. Furthermore, we conduct adaptive distillation to learn a student model and apply consistency regularization to improve the student's generalization. Experimental results on benchmark datasets show the efficacy of the proposed approach for both compound and open domains against existing state-of-the-art approaches.
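The student-learning step can be sketched as a distillation loss against a weighted mixture of the subdomain teachers' predictions plus a consistency term between two views of the same input. The uniform weighting below stands in for the paper's adaptive distillation scheme:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_target(teacher_logits, weights):
    """Distillation target: weighted average of per-subdomain teacher probs."""
    probs = softmax(np.stack(teacher_logits))       # (num_teachers, n, c)
    w = np.asarray(weights)[:, None, None]
    return (w * probs).sum(axis=0)

def kl(p, q, eps=1e-8):
    return (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1).mean()

def student_loss(student_logits, student_logits_aug, teacher_logits, weights):
    """Distillation from the teacher mixture plus two-view consistency."""
    p_student = softmax(student_logits)
    distill = kl(distill_target(teacher_logits, weights), p_student)
    consistency = kl(p_student, softmax(student_logits_aug))
    return distill + consistency

s = np.array([[2.0, 0.0]])
teachers = [np.array([[1.5, 0.5]]), np.array([[2.5, -0.5]])]
loss = student_loss(s, s, teachers, [0.5, 0.5])
```

A student whose predictions match a single teacher (and whose two views agree) drives both terms to zero, which is the behavior the distillation and consistency losses jointly encourage.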
The recent success of StyleGAN demonstrates that a pre-trained StyleGAN latent space is useful for realistic video generation. However, the generated motion in such videos is usually not semantically meaningful due to the difficulty of determining the direction and magnitude in the StyleGAN latent space. In this paper, we propose a framework to generate realistic videos by leveraging a multimodal (sound-image-text) embedding space. As sound provides the temporal context of a scene, our framework learns to generate a video that is semantically consistent with the sound. First, our sound inversion module maps the audio directly into the StyleGAN latent space. We then incorporate the CLIP-based multimodal embedding space to further provide audio-visual relationships. Finally, the proposed frame generator learns to find a trajectory in the latent space which is coherent with the corresponding sound and generates a video in a hierarchical manner. We provide a new high-resolution landscape video dataset (audio-visual pairs) for the sound-guided video generation task. Experiments show that our model outperforms state-of-the-art methods in terms of video quality. We further show several applications, including image and video editing, to verify the effectiveness of our method.